    A NONMONOTONE MEMORY GRADIENT METHOD FOR UNCONSTRAINED OPTIMIZATION

    Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. They were first proposed by Miele and Cantrell (1969) and by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method that generates a descent search direction for the objective function at every iteration and converges globally to the solution when the Wolfe conditions are satisfied in the line search. In this paper, we propose a nonmonotone memory gradient method based on this work and show that it converges globally to the solution. Our numerical results show that the proposed method is efficient on standard test problems when a parameter included in the method is chosen suitably.
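    As background for the two ideas named in the abstract: a memory gradient direction combines the current negative gradient with a few previous search directions, roughly d_k = -g_k + sum_{i=1..m} beta_i d_{k-i} with g_k the gradient of the objective at x_k, and a nonmonotone line search accepts a step by comparing against the worst of the last few objective values rather than only the current one. The sketch below illustrates a generic Grippo-Lampariello-Lucidi style nonmonotone Armijo backtracking rule; it is a minimal illustration, not the specific parameter choices or Wolfe-type conditions of the proposed method, and the names f, grad, and f_hist are placeholders.

        import numpy as np

        def nonmonotone_armijo(f, grad, x, d, f_hist, delta=1e-4, rho=0.5, alpha0=1.0):
            # Generic nonmonotone (Grippo-Lampariello-Lucidi style) Armijo backtracking.
            # f_hist holds the objective values of the last few iterates, including f(x_k);
            # the step is accepted relative to the worst of these values.
            f_ref = max(f_hist)            # nonmonotone reference value
            g_d = float(grad(x) @ d)       # directional derivative g_k^T d_k, assumed negative
            alpha = alpha0
            while f(x + alpha * d) > f_ref + delta * alpha * g_d:
                alpha *= rho               # shrink the step until the acceptance rule holds
            return alpha

        # Usage on a simple quadratic with the steepest-descent direction:
        f = lambda x: 0.5 * float(x @ x)
        grad = lambda x: x
        x0 = np.array([3.0, -4.0])
        step = nonmonotone_armijo(f, grad, x0, -grad(x0), f_hist=[f(x0)])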

    Inexact proximal DC Newton-type method for nonconvex composite functions

    We consider a class of difference-of-convex (DC) optimization problems in which the objective function is the sum of a smooth function and a possibly nonsmooth DC function. The application of proximal DC algorithms to this problem class is well known. In this paper, we combine a proximal DC algorithm with an inexact proximal Newton-type method to obtain an inexact proximal DC Newton-type method, and we establish its global convergence properties. In addition, we give a memoryless quasi-Newton matrix for the scaled proximal mappings and consider the two-dimensional system of semismooth equations that arises in computing them. To obtain the scaled proximal mappings efficiently, we adopt a semismooth Newton method to solve this system inexactly. Finally, we present numerical experiments investigating the efficiency of the proposed method, which show that it outperforms existing methods.
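    For context, proximal DC algorithms of the kind referenced above are usually stated for problems of the form \min_x\, g(x) + h_1(x) - h_2(x), with g smooth and h_1, h_2 convex. A minimal sketch of one common form of the basic (exact, unscaled) iteration is

        \xi_k \in \partial h_2(x_k), \qquad
        x_{k+1} = \mathrm{prox}_{\lambda h_1}\bigl(x_k - \lambda(\nabla g(x_k) - \xi_k)\bigr), \qquad
        \mathrm{prox}_{\lambda h_1}(z) = \arg\min_x \Bigl\{ h_1(x) + \tfrac{1}{2\lambda}\|x - z\|^2 \Bigr\}.

    A scaled proximal mapping replaces the Euclidean distance by a metric (x - z)^\top B (x - z) induced by a positive definite matrix B; in the paper this role is played by the memoryless quasi-Newton matrix. The Newton-type and inexactness aspects of the proposed method are not reproduced in this sketch.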

    A Three-Term Conjugate Gradient Method with Sufficient Descent Property for Unconstrained Optimization

    Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require the storage of matrices. In this paper, we propose a general form of three-term conjugate gradient methods that always generates a sufficient descent direction, and we give a sufficient condition for the global convergence of this general method. Moreover, we present a specific three-term conjugate gradient method based on the multi-step quasi-Newton method. Finally, some numerical results for the proposed method are given.
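    As a rough illustration of the terminology (with generic coefficients, not the specific formulas of the paper, whose coefficients come from the multi-step quasi-Newton update), three-term conjugate gradient directions typically take the form

        d_0 = -g_0, \qquad d_k = -g_k + \beta_k d_{k-1} + \theta_k y_{k-1}, \qquad y_{k-1} = g_k - g_{k-1},

    with g_k = \nabla f(x_k), and the sufficient descent property means g_k^\top d_k \le -c\,\|g_k\|^2 for some constant c > 0 independent of k.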